The Hiring Market Is Truly Terrible Right Now. Job Seekers Are Starting to Do Something Unthinkable to Get Hired.
I Offered to Take Less Money to Get Hired. In a rough hiring market, a growing number of younger, female job seekers have begun "lowballing" their salary expectations. I know this because I did it myself. If it feels impossible to get hired in today's job market, it's because it is. Greenhouse, a hiring software firm, estimates that when someone applies for a job, they now have a 0.4 percent chance of being hired, meaning you have a better chance of getting into Harvard than securing employment.
Invisible Filters: Cultural Bias in Hiring Evaluations Using Large Language Models
Rao, Pooja S. B., Venkatesan, Laxminarayen Nagarajan, Cherubini, Mauro, Jayagopi, Dinesh Babu
Artificial Intelligence (AI) is increasingly used in hiring, with large language models (LLMs) having the potential to influence or even make hiring decisions. However, this raises pressing concerns about bias, fairness, and trust, particularly across diverse cultural contexts. Despite their growing role, few studies have systematically examined the potential biases in AI-driven hiring evaluation across cultures. In this study, we conduct a systematic analysis of how LLMs assess job interviews across cultural and identity dimensions. Using two datasets of interview transcripts, 100 from UK job seekers and 100 from Indian job seekers, we first examine cross-cultural differences in LLM-generated scores for hirability and related traits. Indian transcripts receive consistently lower scores than UK transcripts, even when anonymized, with disparities linked to linguistic features such as sentence complexity and lexical diversity. We then perform controlled identity substitutions (varying names by gender, caste, and region) within the Indian dataset to test for name-based bias. These substitutions do not yield statistically significant effects, indicating that names alone, when isolated from other contextual signals, may not influence LLM evaluations. Our findings underscore the importance of evaluating both linguistic and social dimensions in LLM-driven evaluations and highlight the need for culturally sensitive design and accountability in AI-assisted hiring.
- Europe > United Kingdom (0.28)
- Europe > Croatia > Dubrovnik-Neretva County > Dubrovnik (0.04)
- Asia > Singapore (0.04)
- (8 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
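The controlled identity-substitution analysis the abstract describes can be sketched as a simple resampling test: swap candidate names along one identity dimension, re-score each transcript with the LLM, and check whether the two score distributions differ. A minimal stdlib-only sketch, assuming per-transcript hirability scores have already been collected (the function name, score values, and test choice here are illustrative, not from the paper):

```python
import random
import statistics

def permutation_test(scores_a, scores_b, n_resamples=10000, seed=0):
    """Two-sided permutation test on the difference of mean scores.

    scores_a / scores_b: per-transcript hirability scores for the two
    name-substituted variants of the same transcripts (e.g. names varied
    by gender). Returns the observed mean difference and an approximate
    p-value under the null that the name swap has no effect.
    """
    rng = random.Random(seed)
    observed = statistics.mean(scores_a) - statistics.mean(scores_b)
    pooled = list(scores_a) + list(scores_b)
    n_a = len(scores_a)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # reassign scores to groups at random
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_resamples
```

A non-significant p-value here would correspond to the paper's finding that name substitutions alone do not shift LLM scores.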
Off-Policy Evaluation and Learning for Matching Markets
Hayashi, Yudai, Goda, Shuhei, Saito, Yuta
Matching users based on mutual preferences is a fundamental aspect of services driven by reciprocal recommendations, such as job search and dating applications. Although A/B tests remain the gold standard for evaluating new policies in recommender systems for matching markets, they are costly and impractical for frequent policy updates. Off-Policy Evaluation (OPE) thus plays a crucial role by enabling the evaluation of recommendation policies using only offline logged data naturally collected on the platform. However, unlike conventional recommendation settings, the large scale and bidirectional nature of user interactions in matching platforms introduce variance issues and exacerbate reward sparsity, making standard OPE methods unreliable. To address these challenges and facilitate effective offline evaluation, we propose novel OPE estimators, DiPS and DPR, specifically designed for matching markets. Our methods combine elements of the Direct Method (DM), Inverse Propensity Score (IPS), and Doubly Robust (DR) estimators while incorporating intermediate labels, such as initial engagement signals, to achieve better bias-variance control in matching markets. Theoretically, we derive the bias and variance of the proposed estimators and demonstrate their advantages over conventional methods. Furthermore, we show that these estimators can be seamlessly extended to offline policy learning methods for improving recommendation policies for making more matches. We empirically evaluate our methods through experiments on both synthetic data and A/B testing logs from a real job-matching platform. The empirical results highlight the superiority of our approach over existing methods in off-policy evaluation and learning tasks for a variety of configurations.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- Europe > Czechia > Prague (0.05)
- North America > United States > New York > New York County > New York City (0.05)
- (9 more...)
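The building blocks the abstract names (DM, IPS, DR) are standard and can be sketched compactly. This is a generic illustration of those textbook estimators, not the paper's DiPS or DPR; each logged record is assumed to be an `(action, reward, logging_prob)` triple, and `target_prob` / `reward_model` are caller-supplied functions:

```python
def ips_estimate(logs, target_prob):
    """IPS: reweight logged rewards by the importance ratio
    target_prob(a) / logging_prob(a)."""
    return sum(target_prob(a) / p * r for a, r, p in logs) / len(logs)

def dr_estimate(logs, target_prob, reward_model, actions):
    """DR: model-based DM baseline plus an IPS-weighted correction on
    the residual (reward minus model prediction)."""
    # DM term: expected modeled reward under the target policy.
    dm = sum(target_prob(b) * reward_model(b) for b in actions)
    total = sum(dm + target_prob(a) / p * (r - reward_model(a))
                for a, r, p in logs)
    return total / len(logs)
```

DR keeps the low variance of DM when the reward model is good, while the correction term restores unbiasedness when it is not; the paper's estimators extend this trade-off with intermediate match signals.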
Let's Get You Hired: A Job Seeker's Perspective on Multi-Agent Recruitment Systems for Explaining Hiring Decisions
Bhattacharya, Aditya, Verbert, Katrien
During job recruitment, traditional applicant selection methods often lack transparency. Candidates are rarely given sufficient justifications for recruiting decisions, whether they are made manually by human recruiters or through the use of black-box Applicant Tracking Systems (ATS). To address this problem, our work introduces a multi-agent AI system that uses Large Language Models (LLMs) to guide job seekers during the recruitment process. Using an iterative user-centric design approach, we first conducted a two-phased exploratory study with four active job seekers to inform the design and development of the system. Subsequently, we conducted an in-depth, qualitative user study with 20 active job seekers through individual one-to-one interviews to evaluate the developed prototype. The results of our evaluation demonstrate that participants perceived our multi-agent recruitment system as significantly more actionable, trustworthy, and fair compared to traditional methods. Our study further helped us uncover in-depth insights into factors contributing to these perceived user experiences. Drawing from these insights, we offer broader design implications for building user-aligned, multi-agent explainable AI systems across diverse domains.
- Europe > Belgium > Flanders > Flemish Brabant > Leuven (0.76)
- North America > United States > New York > New York County > New York City (0.14)
- Asia > South Korea > Busan > Busan (0.05)
- (19 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Information Technology > Security & Privacy (1.00)
- Education (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.46)
DISCO: A Hierarchical Disentangled Cognitive Diagnosis Framework for Interpretable Job Recommendation
Yu, Xiaoshan, Qin, Chuan, Zhang, Qi, Zhu, Chen, Ma, Haiping, Zhang, Xingyi, Zhu, Hengshu
The rapid development of online recruitment platforms has created unprecedented opportunities for job seekers while concurrently posing the significant challenge of quickly and accurately pinpointing positions that align with their skills and preferences. Job recommendation systems have significantly alleviated the extensive search burden for job seekers by optimizing user engagement metrics, such as clicks and applications, thus achieving notable success. In recent years, a substantial amount of research has been devoted to developing effective job recommendation models, primarily focusing on text-matching based and behavior modeling based methods. While these approaches have realized impressive outcomes, the explainability of recruitment recommendations remains largely unexplored. To this end, in this paper, we propose DISCO, a hierarchical Disentanglement based Cognitive diagnosis framework, aimed at flexibly accommodating the underlying representation learning model for effective and interpretable job recommendations. Specifically, we first design a hierarchical representation disentangling module to explicitly mine the hierarchical skill-related factors implied in hidden representations of job seekers and jobs. Subsequently, we propose level-aware association modeling to enhance information communication and robust representation learning both inter- and intra-level, which consists of the inter-level knowledge influence module and the level-wise contrastive learning. Finally, we devise an interaction diagnosis module incorporating a neural diagnosis function for effectively modeling the multi-level recruitment interaction process between job seekers and jobs, which introduces the cognitive measurement theory.
Comparative Analysis of Encoder-Based NER and Large Language Models for Skill Extraction from Russian Job Vacancies
Matkin, Nikita, Smirnov, Aleksei, Usanin, Mikhail, Ivanov, Egor, Sobyanin, Kirill, Paklina, Sofiia, Parshakov, Petr
The labor market is undergoing rapid changes, with increasing demands on job seekers and a surge in job openings. Identifying essential skills and competencies from job descriptions is challenging due to varying employer requirements and the omission of key skills. This study addresses these challenges by comparing traditional Named Entity Recognition (NER) methods based on encoders with Large Language Models (LLMs) for extracting skills from Russian job vacancies. Using a labeled dataset of 4,000 job vacancies for training and 1,472 for testing, the performance of both approaches is evaluated. Results indicate that traditional NER models, especially DeepPavlov RuBERT NER tuned, outperform LLMs across various metrics including accuracy, precision, recall, and inference time. The findings suggest that traditional NER models provide more effective and efficient solutions for skill extraction, enhancing job requirement clarity and aiding job seekers in aligning their qualifications with employer expectations. This research contributes to the field of natural language processing (NLP) and its application in the labor market, particularly in non-English contexts.
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.51)
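The NER-versus-LLM comparison above reduces to scoring each extractor's output against gold skill annotations. A minimal sketch of that span-level evaluation, assuming extracted skills are normalized strings (the example skills are illustrative, not taken from the Russian vacancy dataset):

```python
def span_prf(predicted, gold):
    """Precision, recall, and F1 over sets of extracted skill spans.

    predicted: skills produced by an extractor (NER model or LLM).
    gold: human-annotated skills for the same vacancy.
    """
    pred, gold = set(predicted), set(gold)
    tp = len(pred & gold)  # exact-match true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Running the same scorer over both systems' outputs (plus wall-clock inference time) yields the kind of head-to-head comparison the study reports.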
Nasdaq-100 Companies' Hiring Insights: A Topic-based Classification Approach to the Labor Market
Jafari, Seyed Mohammad Ali, Chitsaz, Ehsan
The emergence of new and disruptive technologies makes the economy and labor market more unstable. To overcome this kind of uncertainty and to make the labor market more comprehensible, we must employ labor market intelligence techniques, which are predominantly based on data analysis. Companies use job posting sites to advertise their job vacancies, known as online job vacancies (OJVs). LinkedIn is one of the most utilized websites for matching the supply and demand sides of the labor market; companies post their job vacancies on their job pages, and LinkedIn recommends these jobs to job seekers who are likely to be interested. However, with the vast number of online job vacancies, it becomes challenging to discern overarching trends in the labor market. In this paper, we propose a data mining-based approach for job classification in the modern online labor market. We employed structural topic modeling as our methodology and used the NASDAQ-100 indexed companies' online job vacancies on LinkedIn as the input data. We discover that among all 13 job categories, Marketing, Branding, and Sales; Software Engineering; Hardware Engineering; Industrial Engineering; and Project Management are the most frequently posted job classifications. This study aims to provide a clearer understanding of job market trends, enabling stakeholders to make informed decisions in a rapidly evolving employment landscape.
- Europe > Latvia > Riga Municipality > Riga (0.04)
- Asia > Middle East > Iran > Tehran Province > Tehran (0.04)
- North America > United States > Idaho > Ada County > Boise (0.04)
- (5 more...)
- Banking & Finance > Trading (1.00)
- Banking & Finance > Economy (1.00)
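The paper's pipeline uses structural topic modeling (typically the R `stm` package); as a much lighter stand-in, the classification-and-ranking step can be sketched by assigning each vacancy to the category whose keyword list it overlaps most and then tallying postings per category. The keyword lists below are invented for illustration and are not the paper's learned topics:

```python
from collections import Counter

CATEGORY_KEYWORDS = {  # hypothetical keyword lists, not from the paper
    "Software Engineering": {"python", "backend", "api", "developer"},
    "Marketing, Branding, and Sales": {"brand", "campaign", "sales", "seo"},
    "Project Management": {"agile", "scrum", "stakeholder", "roadmap"},
}

def classify(posting_text):
    """Assign a posting to the category with the largest keyword overlap."""
    tokens = set(posting_text.lower().split())
    return max(CATEGORY_KEYWORDS,
               key=lambda c: len(CATEGORY_KEYWORDS[c] & tokens))

def rank_categories(postings):
    """Rank categories by posting count, mirroring the frequency analysis."""
    return Counter(classify(p) for p in postings).most_common()
```

A real structural topic model infers the topics and their prevalence jointly from the corpus and document covariates; this sketch only mirrors the final counting step that produces the "most frequently posted" ranking.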
Facilitating Multi-Role and Multi-Behavior Collaboration of Large Language Models for Online Job Seeking and Recruiting
Sun, Hongda, Lin, Hongzhan, Yan, Haiyu, Zhu, Chen, Song, Yang, Gao, Xin, Shang, Shuo, Yan, Rui
The emergence of online recruitment services has revolutionized the traditional landscape of job seeking and recruitment, necessitating the development of high-quality industrial applications to improve person-job fitting. Existing methods generally rely on modeling the latent semantics of resumes and job descriptions and learning a matching function between them. Inspired by the powerful role-playing capabilities of Large Language Models (LLMs), we propose to introduce a mock interview process between LLM-played interviewers and candidates. The mock interview conversations can provide additional evidence for candidate evaluation, thereby augmenting traditional person-job fitting based solely on resumes and job descriptions. However, characterizing these two roles in online recruitment still presents several challenges, such as developing the skills to raise interview questions, formulating appropriate answers, and evaluating two-sided fitness. To this end, we propose MockLLM, a novel applicable framework that divides the person-job matching process into two modules: mock interview generation and two-sided evaluation in handshake protocol, jointly enhancing their performance through collaborative behaviors between interviewers and candidates. We design a role-playing framework as a multi-role and multi-behavior paradigm to enable a single LLM agent to effectively behave with multiple functions for both parties. Moreover, we propose reflection memory generation and dynamic prompt modification techniques to refine the behaviors of both sides, enabling continuous optimization of the augmented additional evidence. Extensive experimental results show that MockLLM can achieve the best performance on person-job matching accompanied by high mock interview quality, envisioning its emerging application in real online recruitment in the future.
- Asia > China (0.04)
- North America > United States > Hawaii (0.04)
- Research Report (0.84)
- Personal > Interview (0.68)
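The mock-interview loop the abstract describes, where one LLM plays both interviewer and candidate and both sides are then evaluated, can be sketched as a simple turn-taking control flow. `ask_llm` is a placeholder stub standing in for a real LLM call; the role prompts, turn count, and evaluation step are illustrative assumptions, not MockLLM's actual prompts or handshake protocol:

```python
def ask_llm(role_prompt, context):
    """Stub standing in for an LLM chat call; returns a canned reply so
    the control flow below is runnable without an API."""
    last = context[-1] if context else "start"
    return f"[{role_prompt}] reply to: {last}"

def mock_interview(resume, job_description, turns=3):
    """Alternate interviewer questions and candidate answers, then collect
    a two-sided evaluation (handshake: a match needs both to accept)."""
    transcript = [f"Resume: {resume}", f"Job: {job_description}"]
    for _ in range(turns):
        question = ask_llm("interviewer", transcript)  # interviewer role
        transcript.append(question)
        answer = ask_llm("candidate", transcript)      # candidate role
        transcript.append(answer)
    employer_view = ask_llm("employer-evaluator", transcript)
    candidate_view = ask_llm("candidate-evaluator", transcript)
    return transcript, (employer_view, candidate_view)
```

The generated transcript is the "additional evidence" the paper feeds into person-job fitting alongside the resume and job description.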
Recruiters Are Going Analog to Fight the AI Application Overload
So far, over 3,000 people have applied to one open data science vacancy at a US health tech company this year. The top candidates are given a lengthy and difficult task assessment, which very few pass, says a recruiter at the company, who asked to remain anonymous because they are not authorized to speak publicly. The recruiter says they believe some who did pass may have used artificial intelligence to solve the problem. There was odd wording in some, the recruiter explains, others disclosed using AI, and in one case when the person moved on to the next interview, they couldn't answer questions about the task. "Not only have they wasted their time, but they wasted my time," says the recruiter.
LinkedIn's New AI Chatbot Wants to Help You Find Your Next Job
The tools use generative AI to advise people whether they may be a good fit for open jobs listed on the platform and how to better tailor their profiles to stand out. The new AI features are powered by OpenAI's technology and are indicated by a sparkle emoji under job listings on LinkedIn. Clicking on it opens a chat window where a person can type queries about a job or select prewritten questions such as "Am I a good fit for this role?" Answers are provided in the form of brief bullet points sourced from scraping company profiles and other information on LinkedIn. The automated helper can also answer more specific queries about a job posting, company benefits or culture, or the industry a job is part of.